
    Group invariance principles for causal generative models

    The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms. The interpretation of independence and the way it is utilized, however, varies across these methods. Our aim in this paper is to propose a group-theoretic framework for ICM to unify and generalize these approaches. In our setting, the cause-mechanism relationship is assessed by comparing it against a null hypothesis through the application of random generic group transformations. We show that the group-theoretic view provides a very general tool to study the structure of data-generating mechanisms, with direct applications to machine learning. Comment: 16 pages, 6 figures
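
    A minimal sketch of this kind of test, assuming NumPy: a dependence statistic between an input distribution and a mechanism profile is compared against a null distribution obtained by applying random group transformations (here cyclic shifts of the grid) to the mechanism. The `contrast` statistic and the toy data are illustrative placeholders, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast(p, m):
    """Toy dependence statistic between an input density p and a mechanism
    profile m sampled on the same grid."""
    return abs(np.corrcoef(p, m)[0, 1])

grid = np.linspace(0.0, 1.0, 200)
p = np.exp(-(grid - 0.3) ** 2 / 0.02)      # input distribution (unnormalised)
m = 1.0 + 0.5 * np.sin(6.0 * grid)         # mechanism profile

observed = contrast(p, m)

# Null hypothesis: the same statistic after random group transformations of the
# mechanism -- here cyclic shifts, i.e. the action of Z_200 on the grid.
null = np.array([contrast(p, np.roll(m, rng.integers(len(m)))) for _ in range(1000)])

p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"observed contrast {observed:.3f}, p-value under the group null {p_value:.3f}")
```

    Any group whose elements can be sampled at random (permutations, rotations, and so on) could play the role of the shifts here.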

    Telling cause from effect in deterministic linear dynamical systems

    Inferring a cause from its effect using observed time series data is a major challenge in natural and social sciences. Assuming the effect is generated by the cause through a linear system, we propose a new approach based on the hypothesis that nature chooses the "cause" and the "mechanism that generates the effect from the cause" independently of each other. We therefore postulate that the power spectrum of the time series being the cause is uncorrelated with the square of the transfer function of the linear filter generating the effect. While most causal discovery methods for time series mainly rely on the noise, our method relies on asymmetries of the power spectral density properties that can be exploited even in the context of deterministic systems. We describe mathematical assumptions in a deterministic model under which the causal direction is identifiable with this approach. We also discuss the method's performance under the additive noise model and its relationship to Granger causality. Experiments show encouraging results on synthetic as well as real-world data. Overall, this suggests that the postulate of Independence of Cause and Mechanism is a promising principle for causal inference on empirical time series. Comment: This article is under review for a peer-reviewed conference
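
    A rough illustration of the spectral postulate, assuming NumPy and SciPy: Welch's method estimates the power spectral densities of both series, the squared transfer function is approximated by their ratio, and the correlation between the candidate cause's spectrum and that ratio is computed in both directions. Under the postulate, the true causal direction should yield the weaker correlation. The `dependence` helper and the toy filter are assumptions for illustration, not the authors' estimator.

```python
import numpy as np
from scipy.signal import welch, lfilter

rng = np.random.default_rng(1)

def dependence(cause, effect, nperseg=256):
    """Correlation between the candidate cause's power spectrum and the squared
    transfer function estimated as S_effect / S_cause (smaller = more plausible)."""
    _, s_c = welch(cause, nperseg=nperseg)
    _, s_e = welch(effect, nperseg=nperseg)
    return abs(np.corrcoef(s_c, s_e / s_c)[0, 1])

# Toy data: x drives y through a fixed FIR filter.
x = lfilter([1.0], [1.0, -0.8], rng.standard_normal(10_000))     # coloured cause
y = lfilter([1.0, 0.0, 0.0, 0.0, 0.7], [1.0], x)                  # effect = filtered cause

# The postulate predicts a smaller score for the true direction x -> y.
print("score x -> y:", dependence(x, y))
print("score y -> x:", dependence(y, x))
```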

    Justifying Information-Geometric Causal Inference

    Information Geometric Causal Inference (IGCI) is a new approach to distinguish between cause and effect for two variables. It is based on an independence assumption between input distribution and causal mechanism that can be phrased in terms of orthogonality in information space. We describe two intuitive reinterpretations of this approach that make IGCI more accessible to a broader audience. Moreover, we show that the described independence is related to the hypothesis that unsupervised learning and semi-supervised learning only work when predicting the cause from the effect and not vice versa. Comment: 3 Figures
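
    For concreteness, here is a hedged sketch of a slope-based score in the spirit of IGCI (the paper's exact estimator and reference measures may differ): after rescaling both variables to [0, 1], the average log absolute slope along the sorted data estimates the integral of log |f'(x)| under the input distribution, and the direction with the smaller score is preferred.

```python
import numpy as np

def igci_slope_score(x, y):
    """Average log |dy/dx| along data sorted by x, after rescaling to [0, 1].
    The direction with the smaller score is taken as the causal one."""
    x = (x - x.min()) / (x.max() - x.min())
    y = (y - y.min()) / (y.max() - y.min())
    order = np.argsort(x)
    dx = np.diff(x[order])
    dy = np.diff(y[order])
    keep = (dx != 0) & (dy != 0)
    return np.mean(np.log(np.abs(dy[keep] / dx[keep])))

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 2000)   # cause with a (roughly) uniform input distribution
y = x ** 3                        # deterministic, nonlinear mechanism

c_xy, c_yx = igci_slope_score(x, y), igci_slope_score(y, x)
print("C(X->Y) =", round(c_xy, 3), " C(Y->X) =", round(c_yx, 3))
print("inferred direction:", "X -> Y" if c_xy < c_yx else "Y -> X")
```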

    On the spectrum of r-orthogonal Latin squares of different orders

    Two Latin squares of order n are orthogonal if, in their superposition, each of the n^2 ordered pairs of symbols occurs exactly once. Colbourn, Zhang and Zhu, in a series of papers, determined the integers r for which there exists a pair of Latin squares of order n having exactly r different ordered pairs in their superposition. Dukes and Howell posed the same problem for Latin squares of different orders n and n+k. They obtained a non-trivial lower bound for r and solved the problem for k ≥ 2n/3. Here, for k < 2n/3, some constructions are shown to realize many values of r, and for the small cases 3 ≤ n ≤ 6 the problem is solved.
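
    For same-order squares, which is the setting of the opening definition, the quantity r is straightforward to compute: superpose the two squares and count the distinct ordered pairs, with orthogonality corresponding to r = n^2. A small Python sketch (the order-3 squares are only an example; the different-orders case of Dukes and Howell is not shown):

```python
def superposition_pairs(A, B):
    """Number of distinct ordered pairs (A[i][j], B[i][j]) in the superposition
    of two Latin squares of the same order n."""
    n = len(A)
    return len({(A[i][j], B[i][j]) for i in range(n) for j in range(n)})

# Two orthogonal Latin squares of order 3: A[i][j] = (i + j) mod 3, B[i][j] = (j - i) mod 3.
A = [[0, 1, 2],
     [1, 2, 0],
     [2, 0, 1]]
B = [[0, 1, 2],
     [2, 0, 1],
     [1, 2, 0]]

r = superposition_pairs(A, B)
print("r =", r, "| orthogonal:", r == 3 ** 2)   # r = n^2 iff fully orthogonal
```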